Bounds on the prediction error of penalized least squares estimators with convex penalty

Authors

  • Pierre C. Bellec
  • Alexandre B. Tsybakov
Abstract

This paper considers the penalized least squares estimator with an arbitrary convex penalty. When the observation noise is Gaussian, we show that the prediction error is a subgaussian random variable concentrated around its median. We apply this concentration property to derive sharp oracle inequalities for the prediction error of the LASSO, the group LASSO and the SLOPE estimators, both in probability and in expectation. In contrast to previous work on LASSO-type methods, our oracle inequalities in probability are obtained at any confidence level for estimators whose tuning parameters do not depend on the confidence level. This is also why we are able to establish sparsity oracle bounds in expectation for LASSO-type estimators, whereas previously known techniques did not allow control of the expected risk. In addition, we show that the concentration rate in the oracle inequalities is better than was commonly understood before.
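To make the object of study concrete, the following is a minimal, self-contained sketch (not the authors' code; all names such as `lasso_cd` and the simulation parameters are illustrative) of the penalized least squares estimator with an l1 penalty (the LASSO), together with the in-sample prediction error (1/n)||X(beta_hat - beta_star)||^2 that the oracle inequalities above control:

```python
import numpy as np


def soft_threshold(x, t):
    """Soft-thresholding operator: the proximal map of t * |.|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def lasso_cd(X, y, lam, n_sweeps=200):
    """LASSO by coordinate descent: argmin (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n        # (1/n) * ||X_j||^2 for each column j
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual with coordinate j removed from the current fit.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta


rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                         # p > n: high-dimensional regime
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s] = 1.0                           # s-sparse target vector
y = X @ beta_star + rng.standard_normal(n)    # Gaussian noise, sigma = 1

# A universal tuning parameter of order sigma * sqrt(2 log(p) / n); it does
# not depend on any confidence level, in the spirit of the abstract above.
lam = np.sqrt(2 * np.log(p) / n)
beta_hat = lasso_cd(X, y, lam)

# In-sample prediction error: the quantity the oracle inequalities bound.
pred_err = np.mean((X @ (beta_hat - beta_star)) ** 2)
print(pred_err)
```

Under the sparsity oracle bounds discussed in the paper, this prediction error is expected to be of order s log(p) / n up to constants; the sketch only illustrates the quantities involved, not the proof technique.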

Related articles

Optimal prediction for sparse linear models? Lower bounds for coordinate-separable M-estimators

For the problem of high-dimensional sparse linear regression, it is known that an ℓ0-based estimator can achieve a 1/n "fast" rate for prediction error without any conditions on the design matrix, whereas in the absence of restrictive conditions on the design matrix, popular polynomial-time methods only guarantee the 1/√n "slow" rate. In this paper, we show that the slow rate is intrinsic to a...

Penalized least squares, model selection, convex hull classes and neural nets

We develop improved risk bounds for function estimation with models such as single hidden layer neural nets, using a penalized least squares criterion to select the size of the model. These results show the estimator achieves the best order of balance between approximation error and penalty relative to the sample size. Bounds are given both for the case that the target function is in the convex...

Towards the study of least squares estimators with convex penalty

Penalized least squares estimation is a popular technique in high-dimensional statistics. It includes such methods as the LASSO, the group LASSO, and the nuclear norm penalized least squares. The existing theory of these methods is not fully satisfactory, since it only allows one to prove oracle inequalities that hold with a fixed high probability for estimators that depend on this probability. Furthermore,...

Sparse structures: statistical theory and practice, Bristol, June 2010

Alexandre Tsybakov (Paris VI, France): Estimation of high-dimensional low rank matrices. Suppose that we observe entries or, more generally, linear combinations of entries of an unknown m × T matrix A corrupted by noise. We are particularly interested in the high-dimensional setting where the number mT of unknown entries can be much larger than the sample size N. Motivated by several application...

Estimation of high-dimensional low-rank matrices, by Angelika Rohde and Alexandre Tsybakov

Suppose that we observe entries or, more generally, linear combinations of entries of an unknown m × T matrix A corrupted by noise. We are particularly interested in the high-dimensional setting where the number mT of unknown entries can be much larger than the sample size N. Motivated by several applications, we consider estimation of matrix A under the assumption that it has small rank. This c...

Journal title:

Volume   Issue

Pages  -

Publication date: 2016